xLP: Explainable Link Prediction for Master Data Management

Ganesan, Balaji, Pasha, Matheen Ahmed, Parkala, Srinivasa, Singh, Neeraj R, Mishra, Gayatri, Bhatia, Sumit, Patel, Hima, Naganna, Somashekar, Mehta, Sameep

arXiv.org Artificial Intelligence

Explaining neural model predictions to users requires creativity, especially in enterprise applications, where there are costs associated with users' time and where their trust in the model predictions is critical for adoption. For link prediction in master data management, we have built a number of explainability solutions, drawing on research in interpretability, fact verification, path ranking, neuro-symbolic reasoning, and self-explaining AI. In this demo, we present explanations for link prediction in a creative way, allowing users to choose the explanations they are most comfortable with.
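The abstract above mentions path ranking as one source of explanations for link prediction. As a minimal illustrative sketch (the graph, record names, and scoring rule below are invented for illustration and are not from the paper), a predicted link between two master-data records can be "explained" by surfacing the shortest path of already-known relations connecting them:

```python
from collections import deque

# Toy master-data graph: nodes are records, edges are known relations.
graph = {
    "alice": {"acme_corp", "bob"},
    "bob": {"acme_corp", "alice"},
    "acme_corp": {"alice", "bob", "carol"},
    "carol": {"acme_corp"},
}

def predict_link(a, b):
    """Score a candidate link by counting shared neighbours
    (a simple stand-in for a learned neural scoring function)."""
    return len(graph.get(a, set()) & graph.get(b, set()))

def explain_link(a, b):
    """Path-ranking-style explanation: the shortest path of existing
    relations connecting the two records, found by BFS."""
    queue, seen = deque([[a]]), {a}
    while queue:
        path = queue.popleft()
        for nxt in graph.get(path[-1], set()):
            if nxt == b:
                return path + [b]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A predicted link between "alice" and "carol" is explained by their
# shared connection through "acme_corp".
```

Here the path `alice -> acme_corp -> carol` serves as a human-readable justification the user can inspect alongside the model's score.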


Explainable Link Prediction for Privacy-Preserving Contact Tracing

Ganesan, Balaji, Patel, Hima, Mehta, Sameep

arXiv.org Artificial Intelligence

Contact tracing has been used to identify people who were in close proximity to those infected with the SARS-CoV-2 coronavirus. A number of digital contact tracing applications have been introduced to facilitate or complement physical contact tracing. However, there are a number of privacy issues in the implementation of contact tracing applications, which make people reluctant to install them or to update their infection status on them. In this concept paper, we present ideas from Graph Neural Networks and explainability that could improve trust in these applications and encourage adoption.
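The abstract mentions Graph Neural Networks for contact tracing. As a hedged, illustrative sketch (the contact graph, user IDs, and decay rule below are invented for illustration, not taken from the paper), a GNN-flavoured approach can be approximated by propagating an infection signal over a proximity graph, attenuated by distance, in place of learned message-passing weights:

```python
# Toy contact graph: edges are recorded proximity events.
contacts = {
    "u1": ["u2", "u3"],
    "u2": ["u1", "u4"],
    "u3": ["u1"],
    "u4": ["u2"],
}
infected = {"u4"}  # users who reported a positive status

def exposure_score(user, hops=2, decay=0.5):
    """Aggregate an infection signal from the neighbourhood, halving
    its weight per hop (a stand-in for learned GNN weights)."""
    frontier = {user: 1.0}
    visited = {user}
    score = 0.0
    for _ in range(hops):
        nxt = {}
        for node, weight in frontier.items():
            for nb in contacts.get(node, []):
                if nb in visited:
                    continue
                w = weight * decay
                if nb in infected:
                    score += w
                nxt[nb] = max(nxt.get(nb, 0.0), w)
        visited |= nxt.keys()
        frontier = nxt
    return score
```

The per-hop contributions themselves double as an explanation: a user can be shown *which* contact paths produced their exposure score, rather than an opaque risk number.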